Modelling long-range dependencies is critical for scene understanding tasks in computer vision. Although convolutional neural networks (CNNs) have excelled in many vision tasks, they are still limited in capturing long-range structured relationships as they typically consist of layers of local kernels. A fully-connected graph, such as the self-attention operation in Transformers, is beneficial for such modelling, but its computational overhead is prohibitive. In this paper, we propose a dynamic graph message passing network that significantly reduces the computational complexity compared to modelling a fully-connected graph. This is achieved by adaptively sampling nodes in the graph, conditioned on the input, for message passing. Based on the sampled nodes, we dynamically predict node-dependent filter weights and the affinity matrix for propagating information between them. This formulation allows us to design a self-attention module and, more importantly, a new Transformer-based backbone network that we use both for image classification pretraining and for addressing various downstream tasks (object detection, instance and semantic segmentation). Using this model, we show significant improvements over strong, state-of-the-art baselines on four different tasks. Our approach also outperforms fully-connected graphs while using fewer floating-point operations and parameters. Code and models will be made publicly available at https://github.com/fudan-zvg/dgmn2.
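To make the mechanism concrete, here is a minimal PyTorch sketch of input-conditioned sparse message passing over a 1-D token layout: each query node predicts which k nodes to sample and a node-dependent affinity over them, so cost scales with k rather than the full graph. All names and shapes are illustrative assumptions, not the authors' implementation (which samples on 2-D feature maps and uses differentiable bilinear sampling; the rounding below is only to keep the sketch short).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicMessagePassing(nn.Module):
    def __init__(self, dim, num_samples=9):
        super().__init__()
        self.num_samples = num_samples
        # Per-node, input-conditioned sampling offsets and affinity weights
        self.offset_proj = nn.Linear(dim, num_samples)
        self.affinity_proj = nn.Linear(dim, num_samples)
        self.value_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, n_nodes, dim)
        b, n, d = x.shape
        v = self.value_proj(x)                                  # (b, n, d)
        # Sampling locations conditioned on the input, clamped to [0, n-1].
        base = torch.arange(n, device=x.device, dtype=x.dtype).view(1, n, 1)
        idx = (base + self.offset_proj(x)).clamp(0, n - 1).round().long()
        # Gather the k sampled nodes' values for each query: (b, n, k, d)
        sampled = torch.gather(
            v.unsqueeze(1).expand(b, n, n, d), 2,
            idx.unsqueeze(-1).expand(b, n, self.num_samples, d))
        # Node-dependent affinity matrix over the sampled set only.
        affinity = F.softmax(self.affinity_proj(x), dim=-1)     # (b, n, k)
        out = (affinity.unsqueeze(-1) * sampled).sum(dim=2)     # (b, n, d)
        return self.out_proj(out)

x = torch.randn(2, 196, 64)                 # e.g. a 14x14 feature map, flattened
print(DynamicMessagePassing(64)(x).shape)   # torch.Size([2, 196, 64])
```

Each node thus aggregates from only k sampled neighbours instead of all n, which is where the savings over a fully-connected graph come from.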
Recently, the development of machine learning (ML) potentials has made it possible to perform large-scale and long-time molecular simulations with the accuracy of quantum mechanical (QM) models. However, for high-level QM methods, such as density functional theory (DFT) at the meta-GGA level and/or with exact exchange, quantum Monte Carlo, etc., generating a sufficient amount of data for training is computationally challenging due to their high cost. In this work, we demonstrate that this issue can be largely alleviated with Deep Kohn-Sham (DeepKS), an ML-based DFT model. DeepKS employs a computationally efficient neural-network-based functional model to construct a correction term added on top of a cheap DFT model. Upon training, DeepKS provides energies and forces that closely match the high-level QM method, but the amount of training data required is orders of magnitude smaller than that needed to train a reliable ML potential. DeepKS can therefore serve as a bridge between expensive QM models and ML potentials: one can generate a moderate amount of high-accuracy QM data to train a DeepKS model, and then use the DeepKS model to label a large number of configurations for training an ML potential. This scheme for periodic systems is implemented in the DFT package ABACUS, which is open-source and ready for use in various applications.
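The core idea is delta learning: fit only the residual between a cheap model and an expensive reference, which needs far less data than learning the full energy surface. A minimal numerical sketch of that idea follows, with toy stand-ins for both levels of theory; this illustrates the concept only, not the actual DeepKS functional or the ABACUS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cheap_dft_energy(x):
    # Stand-in for an inexpensive baseline DFT calculation (hypothetical).
    return x.sum()

def high_level_qm_energy(x):
    # Stand-in for an expensive reference method (hypothetical).
    return x.sum() + 0.1 * (x ** 2).sum()

def features_of(x):
    # Toy descriptor features; DeepKS builds its inputs from KS orbitals.
    return x ** 2

# Fit a tiny linear "correction model" on the residual E_ref - E_cheap.
X = rng.normal(size=(200, 8))          # descriptors of 200 configurations
residual = np.array([high_level_qm_energy(x) - cheap_dft_energy(x) for x in X])
w, *_ = np.linalg.lstsq(features_of(X), residual, rcond=None)

def corrected_energy(x):
    return cheap_dft_energy(x) + features_of(x) @ w

x_new = rng.normal(size=8)
print(abs(corrected_energy(x_new) - high_level_qm_energy(x_new)))  # ~0
```

Because the residual is smoother and smaller in magnitude than the total energy, a modest model and dataset suffice, which is exactly what lets DeepKS act as the cheap labeller for an ML potential.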
Traditional biological and pharmaceutical manufacturing plants are controlled by human workers or pre-defined thresholds. Modernized factories have advanced process control algorithms such as model predictive control (MPC). However, there is little exploration of applying deep reinforcement learning to control manufacturing plants. One of the reasons is the lack of high-fidelity simulations and standard APIs for benchmarking. To bridge this gap, we develop an easy-to-use library that includes five high-fidelity simulation environments: BeerFMTEnv, ReactorEnv, AtropineEnv, PenSimEnv and mAbEnv, covering a wide range of manufacturing processes. We build these environments on published dynamic models. Furthermore, we benchmark online and offline, model-based and model-free reinforcement learning algorithms for comparison by follow-up research.
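Since the library advertises a standard API, interacting with an environment should look like the usual Gym reset/step cycle. A hedged sketch follows; the actual import path and constructor of the library are assumptions I have not verified, so a stand-in environment with the same interface is used.

```python
import numpy as np
# from smpl.envs import ReactorEnv   # hypothetical import path; check the repo

class StandInReactorEnv:
    """Stand-in with the usual Gym interface, for illustration only."""
    def reset(self):
        return np.zeros(3)                        # initial observation

    def step(self, action):
        obs = np.random.randn(3)
        reward = -np.abs(obs).sum()               # e.g. penalize setpoint deviation
        done = False
        return obs, reward, done, {}

env = StandInReactorEnv()                         # in practice: the real env
obs = env.reset()
total_reward = 0.0
for t in range(100):
    action = np.random.uniform(-1, 1, size=2)     # replace with an RL policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
print(total_reward)
```

Any Gym-compatible RL library can then be dropped in place of the random policy, which is what makes a standard API valuable for benchmarking.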
Non-parametric two-sample tests (TSTs), which judge whether two sets of samples are drawn from the same distribution, have been widely used in the analysis of critical data. People tend to employ TSTs as trusted basic tools and rarely have any doubt about their reliability. This paper systematically uncovers the failure modes of non-parametric TSTs through adversarial attacks and then proposes corresponding defense strategies. First, we theoretically show that an adversary can upper-bound the distributional shift, which guarantees the attack's invisibility. Furthermore, we theoretically find that the adversary can also degrade the lower bound of a TST's test power, which enables us to iteratively minimize the test criterion in order to search for adversarial pairs. To enable TST-agnostic attacks, we propose an ensemble attack (EA) framework that jointly minimizes the different types of test criteria. Second, to robustify TSTs, we propose a max-min optimization that iteratively generates adversarial pairs to train the deep kernels. Extensive experiments on both simulated and real-world datasets validate the adversarial vulnerabilities of non-parametric TSTs and the effectiveness of our proposed defense. The source code is available at https://github.com/godxuxilie/robust-tst.git.
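The attack idea, minimizing a test criterion under a bounded perturbation so the test fails to reject, can be sketched with a toy Gaussian-kernel MMD standing in for the paper's deep-kernel criteria; the EA framework would jointly minimize several such criteria, and the budget below is an illustrative choice.

```python
import torch

def gaussian_mmd(x, y, sigma=1.0):
    # Biased MMD^2 estimator with a Gaussian kernel (toy test criterion).
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

torch.manual_seed(0)
x = torch.randn(128, 2)                 # samples from P
y = torch.randn(128, 2) + 1.0           # samples from Q (shifted mean)
eps = 0.3                               # stealthiness budget on the shift

delta = torch.zeros_like(y, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = gaussian_mmd(x, y + delta)   # iteratively minimize the criterion
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)         # keep the distributional shift bounded

print(gaussian_mmd(x, y).item(), gaussian_mmd(x, y + delta).item())
```

The defense side reverses the roles: the kernel parameters are trained to keep the criterion large against the worst such delta, i.e., a max-min game over the same objective.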
Responsive listening during face-to-face conversations is a critical element of social interaction and is well established in psychological research. By reacting to the speaker's words, intonation, or behavior in real time with non-verbal signals, listeners show how they are engaged in the dialogue. In this work, we build the Responsive Listener Dataset (RLD), a conversational video corpus collected from public resources featuring 67 speakers and 76 listeners with three different attitudes. We define the responsive listening head generation task as the synthesis of a non-verbal head with motions and expressions, conditioned on multiple inputs including the audio and visual signals of the speaker. Unlike speech-driven gesture or talking head generation, we introduce more modalities in this task, hoping to benefit several research fields, including human-to-human interaction, video-to-video translation, cross-modal understanding and generation. Furthermore, we release an attitude-conditioned listening head generation baseline. Project page: \url{https://project.mhzhou.com/rld}.
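To make the task interface concrete, here is a hedged sketch of what an attitude-conditioned model for this task consumes and produces: per-frame speaker audio and visual features plus an attitude label in, per-frame listener head coefficients out. All module names, dimensions, and the output parameterization are illustrative assumptions, not the released baseline.

```python
import torch
import torch.nn as nn

class ListenerHeadSketch(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=256, n_attitudes=3, out_dim=64):
        super().__init__()
        self.attitude_emb = nn.Embedding(n_attitudes, 32)
        self.fuse = nn.GRU(audio_dim + visual_dim + 32, 256, batch_first=True)
        self.head = nn.Linear(256, out_dim)   # e.g. pose + expression coefficients

    def forward(self, speaker_audio, speaker_visual, attitude):
        # speaker_audio: (b, T, audio_dim); speaker_visual: (b, T, visual_dim)
        b, T, _ = speaker_audio.shape
        att = self.attitude_emb(attitude).unsqueeze(1).expand(b, T, -1)
        h, _ = self.fuse(torch.cat([speaker_audio, speaker_visual, att], dim=-1))
        return self.head(h)                   # (b, T, out_dim) listener motion

model = ListenerHeadSketch()
out = model(torch.randn(2, 50, 128), torch.randn(2, 50, 256),
            torch.tensor([0, 2]))
print(out.shape)  # torch.Size([2, 50, 64])
```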
Reasoning with declarative knowledge (RDK) and sequential decision-making (SDM) are two key research areas in artificial intelligence. RDK methods reason with declarative domain knowledge, including commonsense knowledge, that is either provided a priori or acquired over time, while SDM methods (probabilistic planning and reinforcement learning) seek to compute action policies that maximize the expected cumulative utility over a time horizon; both classes of methods reason in the presence of uncertainty. Despite the rich literature in these two areas, researchers have not fully explored their complementary strengths. In this paper, we survey algorithms that leverage RDK methods while making sequential decisions under uncertainty. We discuss significant developments, open problems, and directions for future work.
Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models. However, it is conservative or even pessimistic, so it sometimes hurts natural generalization. In this paper, we raise a fundamental question: do we have to trade off natural generalization for adversarial robustness? We argue that adversarial training is to employ confident adversarial data for updating the current model. We propose a novel formulation of friendly adversarial training (FAT): rather than employing the most adversarial data maximizing the loss, we search for the least adversarial data (i.e., friendly adversarial data) minimizing the loss, among the adversarial data that are confidently misclassified. Our novel formulation is easy to implement by just stopping the most-adversarial-data searching algorithms, such as PGD (projected gradient descent), early, which we call early-stopped PGD. Theoretically, FAT is justified by an upper bound on the adversarial risk. Empirically, early-stopped PGD allows us to answer the earlier question in the negative: adversarial robustness can indeed be achieved without compromising natural generalization.
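Early-stopped PGD is simple to state in code: run PGD as usual, but stop updating each example once it becomes misclassified, rather than maximizing the loss for the full step budget. A minimal sketch for a generic PyTorch image classifier; the step sizes, budgets, and per-example bookkeeping below are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def early_stopped_pgd(model, x, y, eps=8/255, alpha=2/255, steps=10, tau=0):
    """Return "friendly" adversarial examples for a batch of images x.

    tau: extra PGD steps allowed after an example first becomes
    misclassified (tau=0 stops immediately).
    """
    x_adv = x.clone().detach()
    budget = torch.full((x.size(0),), tau, device=x.device)
    active = torch.ones(x.size(0), dtype=torch.bool, device=x.device)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Stop (per example) once the current iterate is misclassified.
        budget = budget - (logits.argmax(dim=1) != y).long()
        active = active & (budget >= 0)
        if not active.any():
            break
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = torch.where(active.view(-1, 1, 1, 1),
                                x_adv + alpha * grad.sign(), x_adv)
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to eps-ball
            x_adv = x_adv.clamp(0, 1).detach()
    return x_adv.detach()
```

The returned examples are then used as the training inputs in place of fully maximized PGD adversaries, which is the whole change FAT makes to standard adversarial training.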
The combination of conduct, emotion, motivation, and thinking is referred to as personality. To shortlist candidates more effectively, many organizations rely on personality predictions. By grouping applicants based on the required personality preferences, a firm can hire or pick the best candidate for the desired job description. A model is created to identify an applicant's personality type so that employers may find qualified candidates by examining the person's facial expressions, speech intonation, and resume. Additionally, the paper emphasizes detecting changes in employee behavior: employee attitudes and behavior towards each set of questions are examined and analyzed. Here, the K-Modes clustering method is used to predict employee well-being, covering job pressure, the working environment, and relationships with peers, utilizing the OCEAN model and the CNN algorithm in the AVI-AI administrative system. The findings imply that AVIs can be used for efficient candidate screening with an AI decision agent. The study of this field goes beyond the current exploration and needs to be extended with deeper models and new configurations that can handle extremely complex operations.
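The clustering step can be sketched with the open-source `kmodes` package on categorical survey responses; the feature set, answer categories, and cluster count below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from kmodes.kmodes import KModes

rng = np.random.default_rng(0)
# 100 employees x 6 categorical answers (e.g. about job pressure,
# working environment, peer relationships), as Likert-style labels.
answers = rng.choice(["low", "medium", "high"], size=(100, 6))

# K-Modes is the categorical analogue of k-means: centroids are modes
# and distance counts attribute mismatches.
km = KModes(n_clusters=3, init="Huang", n_init=5, verbose=0)
labels = km.fit_predict(answers)

# Each cluster's mode vector summarizes one well-being profile.
for c, centroid in enumerate(km.cluster_centroids_):
    print(f"cluster {c}: {centroid}, n={np.sum(labels == c)}")
```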
Model predictive controllers (MPCs) are widely used for controlling cyber-physical systems. MPC is an iterative process of optimizing the prediction of a robot's future states over a fixed time horizon. MPCs are effective in practice, but because they are computationally expensive and slow, they are not well suited to real-time applications. This flaw can be overcome by approximating an MPC's functionality: neural networks are very good function approximators and are much faster than an MPC. However, applying neural networks to control-based applications can be challenging because the data do not satisfy the i.i.d. assumption. This study investigates various imitation learning methods for using a neural network in a control-based environment and evaluates their benefits and shortcomings.
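One imitation-learning method that directly targets the non-i.i.d. issue is DAgger-style data aggregation: roll out the learner's own policy so training data comes from the states the learner actually visits, but label every visited state with the expert's (here, the MPC's) action. A toy sketch with hypothetical stand-ins for the MPC expert and the plant dynamics:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def mpc_expert(state):
    # Stand-in for an expensive MPC solve (hypothetical toy control law).
    return -0.5 * state

def dynamics(state, action):
    # Stand-in plant: stable linear system plus small process noise.
    return 0.9 * state + action + np.random.normal(0, 0.01, size=state.shape)

policy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500)
states, actions = [], []

for it in range(5):                        # DAgger iterations
    s = np.random.randn(4)
    for t in range(50):
        # Always label the visited state with the expert's action, but
        # (after iteration 0) roll out with the learner's own policy.
        a_expert = mpc_expert(s)
        states.append(s); actions.append(a_expert)
        a = a_expert if it == 0 else policy.predict(s.reshape(1, -1))[0]
        s = dynamics(s, a)
    policy.fit(np.array(states), np.array(actions))  # aggregate and refit

print(policy.predict(np.random.randn(1, 4)))
```

Plain behavior cloning would train only on expert rollouts and then drift at deployment; the aggregation loop above is precisely what repairs that distribution mismatch.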
Abstractive dialogue summarization has received increasing attention recently. Despite the fact that most current dialogue summarization systems are trained to maximize the likelihood of human-written summaries and have achieved significant results, there is still a huge gap in generating high-quality summaries as judged by humans, e.g., in coherence and faithfulness, partly due to the misalignment of maximizing a single human-written summary. To this end, we propose to incorporate different levels of human feedback into the training process, enabling us to guide the models to capture the behaviors humans care about in summaries. Specifically, we ask humans to highlight the salient information to be included in summaries, providing local feedback, and to make overall comparisons among summaries in terms of coherence, accuracy, coverage, conciseness, and overall quality, providing global feedback. We then combine both local and global feedback to fine-tune the dialogue summarization policy with reinforcement learning. Experiments conducted on multiple datasets demonstrate the effectiveness and generalization of our methods over state-of-the-art supervised baselines, especially in terms of human judgments.
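A hedged sketch of how the two feedback signals might be combined into a single reward for policy-gradient fine-tuning; the reward definitions, weighting, and REINFORCE-style loss below are illustrative assumptions, not the paper's exact objective.

```python
import torch

def local_reward(summary_tokens, highlighted_tokens):
    # Toy proxy for local feedback: coverage of human-highlighted tokens.
    overlap = len(set(summary_tokens) & set(highlighted_tokens))
    return overlap / max(len(highlighted_tokens), 1)

def global_reward(summary_text, reward_model):
    # A learned model of the pairwise human comparisons (coherence,
    # accuracy, coverage, conciseness, overall quality).
    return reward_model(summary_text)

def reinforce_loss(log_probs, summary_tokens, highlighted, summary_text,
                   reward_model, alpha=0.5):
    r = (alpha * local_reward(summary_tokens, highlighted)
         + (1 - alpha) * global_reward(summary_text, reward_model))
    # REINFORCE: scale the sampled summary's log-likelihood by its
    # combined reward (negated, since optimizers minimize).
    return -r * log_probs.sum()

# Toy usage with stand-ins for the policy's token log-probs and reward model:
log_probs = torch.randn(12, requires_grad=True)
loss = reinforce_loss(log_probs, ["a", "b"], ["b", "c"], "summary text",
                      reward_model=lambda s: 0.7)
loss.backward()
print(log_probs.grad is not None)  # True
```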